
Web Survey Bibliography

Title: Setting Priorities: Spurious Differences in Response Rates
Year: 2013
Access date: 15.04.2014
Full text: pdf (265 KB)

Abstract: Despite increasing concern that response rates are insufficient as sole indicators of nonresponse bias and data quality (Groves & Peytcheva, 2008; Keeter, Miller, Kohut, Groves & Presser, 2000), they remain the most prevalent measure for many journals, funding agencies, and survey programs (for alternative indicators, see Wagner, 2012). In addition to overall response rates, research has shown that contact and cooperation need to be considered as separate processes, associated with different sample-unit characteristics and thus different biases (Lynn & Clarke, 2002). While non-contacted persons are more likely to be employed and to lead an active lifestyle, those who are contacted but do not participate in a survey are more likely to be socially disengaged (Groves & Couper, 1998, chs. 4-5). Guidelines such as those of the American Association for Public Opinion Research (AAPOR) therefore standardize the calculation of contact and cooperation rates in addition to overall response rates. The standardized calculation and reporting of response rates has been a major achievement of survey methodology, making the various types of response rates (i.e., overall response rates, contact rates, and cooperation rates) comparable across surveys and countries. Nevertheless, there is one important step that almost all surveys fail to make explicit when reporting response rates: the coding of the sequence of call outcomes (i.e., contact attempts) at a sample unit into a final disposition code for that sample unit.
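The two-step logic the abstract describes, first collapsing a unit's sequence of call outcomes into a single final disposition code, then computing overall, contact, and cooperation rates from the disposition counts, can be sketched as follows. This is a minimal illustration, not the authors' method: the outcome labels, the priority ordering, and the simplified rate formulas (rough analogues of AAPOR RR1, CON1, and COOP1, ignoring partials and unknown-eligibility cases) are all assumptions for the example.

```python
# Hypothetical sketch: collapse call outcomes into a final disposition,
# then compute simplified response, contact, and cooperation rates.
# Labels, priority order, and formulas are illustrative assumptions,
# not taken from the paper or from the AAPOR Standard Definitions verbatim.

# A completed interview outranks a refusal, which outranks a contact
# without cooperation, which outranks never reaching the unit at all.
PRIORITY = ["interview", "refusal", "contact_only", "no_contact"]

def final_disposition(call_outcomes):
    """Return the highest-priority outcome observed across all call attempts."""
    for code in PRIORITY:
        if code in call_outcomes:
            return code
    return "no_contact"

def survey_rates(dispositions):
    """Compute simplified overall response, contact, and cooperation rates."""
    n = len(dispositions)
    interviews = dispositions.count("interview")
    contacted = n - dispositions.count("no_contact")
    return {
        "response_rate": interviews / n,               # interviews / all units
        "contact_rate": contacted / n,                 # contacted / all units
        "cooperation_rate": interviews / contacted if contacted else 0.0,
    }

# Four sample units, each with its sequence of call outcomes.
units = [
    ["no_contact", "contact_only", "interview"],
    ["no_contact", "refusal"],
    ["no_contact", "no_contact"],
    ["interview"],
]
finals = [final_disposition(u) for u in units]
print(finals)         # ['interview', 'refusal', 'no_contact', 'interview']
print(survey_rates(finals))
```

Note how the priority ordering itself is a coding decision: a unit with both a refusal and a later interview is counted as an interview here, and a different ordering would shift the reported rates, which is exactly the implicit step the abstract argues should be made explicit.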

Bibliographic type: Journal article

